Results 1 - 20 of 29

1.
SpringerBriefs in Applied Sciences and Technology ; : 27-34, 2023.
Article in English | Scopus | ID: covidwho-2322938

ABSTRACT

In order to repurpose currently available therapeutics for novel diseases, druggable targets have to be identified and matched with small molecules. In the case of a public health emergency, such as the ongoing coronavirus disease 2019 (COVID-19) pandemic, this identification needs to be accomplished quickly to support the rapid initiation of effective treatments to minimize casualties. The utilization of supercomputers, or more generally High-Performance Computing (HPC) facilities, to accelerate drug design is well established, but when the pandemic emerged in early 2020, it was necessary to activate a process of urgent computing, i.e., prioritized and immediate access to the most powerful computing resources available. Thanks to the close collaboration of the partners in the HPC activity, it was possible to rapidly deploy an urgent computing infrastructure of world-class supercomputers, massive cloud storage, efficient simulation software, and analysis tools. With this infrastructure, the project team performed very long molecular dynamics simulations and extreme-scale virtual drug screening experiments, eventually identifying molecules with potential antiviral activity. In conclusion, the EXaSCale smArt pLatform Against paThogEns for CoronaVirus (EXSCALATE4CoV) project successfully brought together Italian computing resources to help identify effective drugs to stop the spread of the SARS-CoV-2 virus. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.

2.
Computing ; 105(4):811-830, 2023.
Article in English | Academic Search Complete | ID: covidwho-2266159

ABSTRACT

The world has changed dramatically since the outbreak of the COVID-19 pandemic. It has not only affected humanity but has also badly damaged the world's socio-economic system. Currently, people are looking for a magical solution to overcome this pandemic, and scientists across the globe are working to find remedies to this challenge. Technology is not far behind in this situation, attracting many sectors, from government agencies to medical practitioners and market analysts. Within a few months, scientists, researchers, and industrialists have come up with acceptable innovative solutions and have harnessed existing technologies to stop the spread of COVID-19. It is therefore pertinent to highlight the role of intelligent technologies, which play a pivotal role in curbing this pandemic. In this paper, we devise a taxonomy of the technologies being used in the current pandemic. We show that the most prominent technologies are artificial intelligence, machine learning, cloud computing, big data analytics, and blockchain. Moreover, we highlight some key open challenges that technologists might face in controlling this outbreak. Finally, we conclude that impeding this pandemic requires a collective effort from different professionals in support of using existing and new technologies, and that machine learning approaches integrated with cloud computing and high-performance computing could provision the response with minimum cost and time. [ABSTRACT FROM AUTHOR] Copyright of Computing is the property of Springer Nature and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all abstracts.)

3.
International Journal of Electrical and Computer Engineering ; 13(3):3111-3123, 2023.
Article in English | Scopus | ID: covidwho-2256269

ABSTRACT

The exponential growth of bioinformatics data has raised a new problem: computation time. The increase in data to be processed is not matched by an increase in hardware performance, so researchers face long computation times, especially in drug-target interaction prediction, where the computational complexity is exponential. One focus of high-performance computing research is the utilization of the graphics processing unit (GPU) to perform multiple computations in parallel. This study examines how well the GPU performs when used for deep learning models that predict drug-target interactions. It uses the gold-standard drug-target interaction (DTI) data and a coronavirus disease (COVID-19) dataset. The stages of this research are data acquisition, data preprocessing, model building, hyperparameter tuning, performance evaluation, and COVID-19 dataset testing. The results indicate that using a GPU for deep learning models can speed up the training process by 100 times. In addition, hyperparameter tuning is greatly helped by the GPU, which can make the process up to 55 times faster. When tested on the COVID-19 dataset, the model showed good performance, with 76% accuracy, 74% F-measure, and a speed-up value of 179. © 2023 Institute of Advanced Engineering and Science. All rights reserved.
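
The speed-ups described above come from moving training onto the GPU. As a rough illustration only, the following minimal PyTorch sketch times the same small stand-in classifier on CPU and GPU; the architecture, feature size, and random data are placeholders, not the paper's DTI model or datasets.

```python
# Illustrative only: time a small stand-in classifier on CPU vs. GPU.
# The architecture, feature size, and random data are placeholders,
# not the DTI model or datasets used in the paper.
import time

import torch
import torch.nn as nn


def train_once(device, n_samples=20000, n_features=1024, epochs=5):
    # Random stand-ins for drug-target feature vectors and binary interaction labels.
    x = torch.randn(n_samples, n_features, device=device)
    y = torch.randint(0, 2, (n_samples, 1), device=device, dtype=torch.float32)

    model = nn.Sequential(
        nn.Linear(n_features, 512), nn.ReLU(),
        nn.Linear(512, 128), nn.ReLU(),
        nn.Linear(128, 1),
    ).to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    start = time.perf_counter()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    if device.type == "cuda":
        torch.cuda.synchronize()  # wait for queued GPU kernels before stopping the clock
    return time.perf_counter() - start


cpu_time = train_once(torch.device("cpu"))
if torch.cuda.is_available():
    gpu_time = train_once(torch.device("cuda"))
    print(f"CPU {cpu_time:.2f}s, GPU {gpu_time:.2f}s, speed-up {cpu_time / gpu_time:.1f}x")
else:
    print(f"CPU {cpu_time:.2f}s (no GPU available)")
```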

4.
International Journal of High Performance Computing Applications ; 37(1), 2023.
Article in English | Scopus | ID: covidwho-2239171

ABSTRACT

This paper describes an integrated, data-driven operational pipeline based on national agent-based models to support federal and state-level pandemic planning and response. The pipeline consists of (i) an automatic semantic-aware scheduling method that coordinates jobs across two separate high performance computing systems; (ii) a data pipeline to collect, integrate and organize national and county-level disaggregated data for initialization and post-simulation analysis; (iii) a digital twin of national social contact networks made up of 288 million individuals and 12.6 billion time-varying interactions covering the US states and DC; (iv) an extension of a parallel agent-based simulation model to study epidemic dynamics and associated interventions. This pipeline can run 400 replicates of national runs in less than 33 hours, and reduces the need for human intervention, resulting in faster turnaround times and higher reliability and accuracy of the results. Scientifically, the work has led to significant advances in real-time epidemic sciences. © The Author(s) 2022.
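
For readers unfamiliar with agent-based epidemic models, the toy sketch below runs a discrete-time SIR process over an explicit contact network. It is an illustrative miniature only: the random graph and the transmission and recovery probabilities are made up, whereas the paper's digital twin covers 288 million agents with time-varying interactions.

```python
# Illustrative only: a discrete-time SIR process over an explicit contact network.
# The random graph and the transmission/recovery probabilities are made up.
import random


def simulate(contacts, p_transmit=0.05, p_recover=0.10, seeds=5, days=100, seed=0):
    rng = random.Random(seed)
    state = {node: "S" for node in contacts}  # each agent is S, I, or R
    for node in rng.sample(sorted(contacts), seeds):
        state[node] = "I"

    infected_per_day = []
    for _ in range(days):
        newly_infected, newly_recovered = [], []
        for node, status in state.items():
            if status != "I":
                continue
            for neighbour in contacts[node]:  # a time-varying network would swap this list per day
                if state[neighbour] == "S" and rng.random() < p_transmit:
                    newly_infected.append(neighbour)
            if rng.random() < p_recover:
                newly_recovered.append(node)
        for node in newly_infected:
            state[node] = "I"
        for node in newly_recovered:
            state[node] = "R"
        infected_per_day.append(sum(1 for s in state.values() if s == "I"))
    return infected_per_day


# Small synthetic contact network: 1000 agents, ~10 contacts each.
nodes = list(range(1000))
graph_rng = random.Random(42)
contacts = {n: graph_rng.sample([m for m in nodes if m != n], 10) for n in nodes}
print(simulate(contacts)[:10])  # infected counts over the first 10 simulated days
```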

5.
IEEE Transactions on Parallel and Distributed Systems ; 2023.
Article in English | Scopus | ID: covidwho-2232135

ABSTRACT

Simulation-based Inference (SBI) is a widely used set of algorithms to learn the parameters of complex scientific simulation models. While primarily run on CPUs in High-Performance Computing clusters, these algorithms have been shown to scale in performance when developed to run on massively parallel architectures such as GPUs. While parallelizing existing SBI algorithms provides performance gains, this might not be the most efficient way to utilize the achieved parallelism. This work proposes a new parallelism-aware adaptation of an existing SBI method, namely approximate Bayesian computation with Sequential Monte Carlo (ABC-SMC). This adaptation is designed to utilize the parallelism not only for performance gains, but also for qualitative benefits in the learnt parameters. The key idea is to replace the single 'step-size' hyperparameter, which governs how the state space of parameters is explored during learning, with step sizes sampled from a tuned Beta distribution. This allows the new ABC-SMC algorithm to explore the state space of the parameters being learned more efficiently. We test the effectiveness of the proposed algorithm by learning parameters for an epidemiology model running on a Tesla T4 GPU. Compared to the parallelized state-of-the-art SBI algorithm, we obtain results of similar quality in ~100x fewer simulations and observe ~80x lower run-to-run variance across 10 independent trials. © IEEE.
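
A toy sketch of the core idea follows, under the simplifying assumptions of a one-parameter Gaussian toy simulator and with importance weights and GPU batching omitted: each perturbation draws its own step size from a Beta distribution rather than using one fixed step-size hyperparameter. The Beta shape parameters and tolerance schedule are illustrative, not those used in the paper.

```python
# Illustrative only: a stripped-down ABC-SMC loop for a one-parameter toy model,
# showing per-proposal step sizes drawn from a Beta distribution instead of one
# fixed step-size hyperparameter. Importance weights are omitted; the Beta shape
# parameters and tolerances are made up.
import numpy as np

rng = np.random.default_rng(0)


def simulate(theta, n=200):
    # Toy "simulator": noisy observations whose mean is the unknown parameter.
    return rng.normal(theta, 1.0, size=n)


observed = simulate(2.5)


def distance(samples):
    return abs(samples.mean() - observed.mean())


n_particles = 500
tolerances = [1.0, 0.5, 0.2, 0.1]                # shrinking acceptance thresholds
particles = rng.uniform(-5.0, 5.0, n_particles)  # draws from a flat prior

for eps in tolerances:
    accepted = []
    scale = 2.0 * particles.std()                # population spread sets the overall scale
    while len(accepted) < n_particles:
        theta = rng.choice(particles)
        step = rng.beta(2.0, 5.0) * scale        # per-proposal step size from a Beta distribution
        proposal = theta + rng.normal(0.0, step)
        if distance(simulate(proposal)) < eps:
            accepted.append(proposal)
    particles = np.array(accepted)

print(f"approximate posterior mean: {particles.mean():.2f} (true value 2.5)")
```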

6.
2022 IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2022 ; : 2595-2602, 2022.
Article in English | Scopus | ID: covidwho-2223065

ABSTRACT

Contemporary drug discovery relies heavily on massive high performance computing (HPC) resources for docking and molecular dynamics simulations of proteins interacting with drug candidate ligands, such as the recently published simulations on ORNL's SUMMIT in COVID-19 pharmaceutical research. This work presents a unique spectral analysis approach using the wavelet transform (WT) to understand the correlation between the time evolution of protein conformations generated by molecular dynamics and the specific protein conformations that are selected for binding by ligands. A DWT-based spectral analysis is performed on the unique protein descriptors previously identified to be important in protein-ligand binding. The new protein time-series information from the wavelet-based time-frequency domain analysis is used for more refined protein conformation selection and to improve the deep learning and machine learning (AI/ML) framework for predicting binding vs. non-binding protein conformations for three target proteins: ADORA2A, OPRD1 and OPRK1. In this work, wavelets are used for spectral analysis because of their simultaneous time-frequency resolution and denoising properties. © 2022 IEEE.
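
As a rough illustration of a DWT-based analysis of a descriptor time series, the sketch below uses PyWavelets to decompose a synthetic signal, soft-threshold the detail coefficients, and reconstruct a denoised version. The signal, wavelet family, decomposition level, and threshold are placeholders rather than the study's actual choices.

```python
# Illustrative only: DWT decomposition and soft-threshold denoising of a synthetic
# stand-in for a per-frame protein descriptor time series. The wavelet family,
# decomposition level, and threshold are placeholder choices.
import numpy as np
import pywt

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 1024)
descriptor = np.sin(2 * np.pi * 0.5 * t) + 0.3 * rng.standard_normal(t.size)  # synthetic signal

coeffs = pywt.wavedec(descriptor, "db4", level=4)  # one approximation band + four detail bands
denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, value=0.2, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(denoised_coeffs, "db4")

print("band lengths:", [len(c) for c in coeffs])
print("change after denoising (RMS):",
      float(np.sqrt(np.mean((descriptor - denoised[: t.size]) ** 2))))
```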

7.
Diagnostics (Basel) ; 13(3)2023 Jan 20.
Article in English | MEDLINE | ID: covidwho-2199882

ABSTRACT

The COVID-19 pandemic shed light on the need for quick diagnosis tools in healthcare, leading to the development of several algorithmic models for disease detection. Though these models are relatively easy to build, their training requires a lot of data, storage, and resources, which may not be available for use by medical institutions or could be beyond the skillset of the people who most need these tools. This paper describes a data analysis and machine learning platform that takes advantage of high-performance computing infrastructure for medical diagnosis support applications. This platform is validated by re-training a previously published deep learning model (COVID-Net) on new data, where it is shown that the performance of the model is improved through large-scale hyperparameter optimisation that uncovered optimal training parameter combinations. The per-class accuracy of the model, especially for COVID-19 and pneumonia, is higher when using the tuned hyperparameters (healthy: 96.5%; pneumonia: 61.5%; COVID-19: 78.9%) as opposed to parameters chosen through traditional methods (healthy: 93.6%; pneumonia: 46.1%; COVID-19: 76.3%). Furthermore, training speed-up analysis shows a major decrease in training time as resources increase, from 207 min using 1 node to 54 min when distributed over 32 nodes, but highlights the presence of a cut-off point where the communication overhead begins to affect performance. The developed platform is intended to provide the medical field with a technical environment for developing novel portable artificial-intelligence-based tools for diagnosis support.
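
The reported timings imply far-from-linear scaling, which is what makes the communication-overhead cut-off visible. A quick back-of-the-envelope check, using only the numbers quoted in the abstract:

```python
# Scaling check using only the timings quoted above: 207 minutes on one node
# versus 54 minutes on 32 nodes.
t_1_node, t_32_nodes, nodes = 207.0, 54.0, 32

speedup = t_1_node / t_32_nodes   # ~3.8x faster in wall-clock time
efficiency = speedup / nodes      # ~0.12, i.e. ~12% of ideal linear scaling
print(f"speed-up: {speedup:.1f}x, parallel efficiency: {efficiency:.0%}")
```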

8.
36th IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2022 ; : 338-345, 2022.
Article in English | Scopus | ID: covidwho-2018898

ABSTRACT

Teaching High-Performance Computing (HPC) in undergraduate programs represents a significant challenge for most universities in developing countries like Mexico. Deficiencies in the required infrastructure and equipment, inadequate curricula in computer engineering programs (and resistance to changing them), and students' lack of interest, motivation, or knowledge of the area are the main difficulties to overcome. The COVID-19 pandemic added a further challenge to teaching HPC in these programs. Despite these obstacles, some strategies have been developed to introduce HPC concepts to Mexican students without necessarily modifying the traditional curricula. This paper presents a case study of four public universities in Mexico based on our experience as instructors. We also propose a course that introduces HPC principles while considering the heterogeneous background of the students at such universities. The results concern the number of students enrolling in related classes and participating in extracurricular projects. © 2022 IEEE.

9.
2022 Conference on Practice and Experience in Advanced Research Computing: Revolutionary: Computing, Connections, You, PEARC 2022 ; 2022.
Article in English | Scopus | ID: covidwho-1986413

ABSTRACT

Anvil is a new XSEDE advanced capacity computational resource funded by NSF. Designed with a systematic strategy to meet the ever-increasing and diversifying research needs for advanced computational capacity, Anvil integrates a large-capacity high-performance computing (HPC) system with a comprehensive ecosystem of software, access interfaces, programming environments, and composable services in a seamless environment to support a broad range of current and future science and engineering applications of the nation's research community. Anchored by a 1000-node CPU cluster featuring the latest AMD EPYC 3rd-generation (Milan) processors, along with a set of 1 TB large-memory nodes and NVIDIA A100 GPU nodes, Anvil integrates a multi-tier storage system, a Kubernetes composable subsystem, and a pathway to the Azure commercial cloud to support a variety of workflows and storage needs. Anvil was successfully deployed and integrated with XSEDE during the worldwide COVID-19 pandemic. Entering production operation in February 2022, Anvil will serve the nation's science and engineering research community for five years. This paper describes the Anvil system and services, including its various components and subsystems and its user-facing features, and shares the Anvil team's experience from its early user access program from November 2021 through January 2022. © 2022 Owner/Author.

10.
Turkish Journal of Electrical Engineering & Computer Sciences ; 30(4):1571-1585, 2022.
Article in English | Academic Search Complete | ID: covidwho-1955673

ABSTRACT

In recent years, the need for the ability to work remotely, and consequently for the availability of remote computer-based systems, has increased substantially. This trend saw a dramatic increase with the onset of the 2020 pandemic. Local data is often produced, then stored and processed in the cloud to handle this flood of computation and storage needs. Historically, HPC (high performance computing) and the concept of big data have been utilized for the storage and processing of large data. Both HPC and Hadoop can be utilized for analytical work, though the differences between them may not be obvious. Both use parallel processing techniques and offer options for data to be stored in either a centralized or distributed manner. Recent studies have focused on a hybrid approach that combines both technologies, so the convergence between HPC and big data technologies can be achieved with distributed computing machines at the layer described. This paper is motivated by the need for a distributed computing framework that can scale from SOC (system on chip) boards to desktop computers and servers. For this purpose, we propose a distributed computing environment that can scale to devices with heterogeneous architectures, where devices with resource-limited nodes can form clusters and run applications on top of them. The solution can be thought of as a minimalist hybrid approach between HPC and big data. Within the scope of this study, not only is the design of the proposed system detailed, but critical modules and subsystems are also implemented as a proof of concept. [ABSTRACT FROM AUTHOR] Copyright of Turkish Journal of Electrical Engineering & Computer Sciences is the property of the Scientific and Technical Research Council of Turkey and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all abstracts.)

11.
45th Jubilee International Convention on Information, Communication and Electronic Technology, MIPRO 2022 ; : 368-373, 2022.
Article in English | Scopus | ID: covidwho-1955337

ABSTRACT

Acute Respiratory Distress Syndrome (ARDS), also known as non-cardiogenic pulmonary edema, is a severe condition that affects around one in ten thousand people every year, with life-threatening consequences. Its pathophysiology is characterized by bronchoalveolar injury and alveolar collapse (i.e., atelectasis), and patient diagnosis is based on the so-called 'Berlin Definition'. One common practice in Intensive Care Units (ICUs) is to use lung recruitment manoeuvres (RMs) in ARDS to open up unstable, collapsed alveoli using a temporary increase in transpulmonary pressure. Many RMs have been proposed, but there is also confusion regarding the optimal way to achieve and maintain alveolar recruitment in ARDS. Therefore, the best way to prevent lung damage from ARDS is to identify its onset, which is still a matter of research. Determining ARDS onset, progression, diagnosis, and treatment requires algorithmic support, which in turn raises the demand for cutting-edge computing power. This paper thus describes several data science approaches to better understand ARDS, such as time series analysis and image recognition with deep learning methods, and mechanistic modelling using a lung simulator. In addition, we outline how High-Performance Computing (HPC) helps in both cases, including porting the mechanistic models from serial MATLAB approaches to modular supercomputer designs. Finally, without losing sight of the datasets, their features, and their relevance, we also include broader lessons learned in the context of ARDS from our Smart Medical Information Technology for Healthcare (SMITH) research project. The SMITH consortium brings together technologists and medical doctors from nine hospitals, and the ARDS research is performed by our Algorithmic Surveillance of ICU (ASIC) patients team. The paper also describes why it is essential that HPC experts team up with medical doctors, who usually lack technical and data science experience: a wealth of data exists, yet ARDS analysis is still progressing slowly. We complement the ARDS findings with selected insights from our COVID-19 research under the umbrella of the European Open Science Cloud (EOSC) fast-track grant, a very similar application field. © 2022 Croatian Society MIPRO.

12.
Inf Serv Use ; 42(1): 117-127, 2022.
Article in English | MEDLINE | ID: covidwho-1862561

ABSTRACT

From 1992 to 1995 Donald A.B. Lindberg M.D. served concurrently as the founding director of the National Coordination Office (NCO) for High Performance Computing and Communications (HPCC) and NLM director. The NCO and its successors coordinate the Presidential-level multi-agency HPCC research and development (R&D) program called for in the High-Performance Computing Act of 1991. All large Federal science and technology R&D and applications agencies, including those involved in medical research and health care, participate in the now-30-year-old program. Lindberg's HPCC efforts built on his pioneering work in developing and applying advances in computing and networking to meet the needs of the medical research and health care communities. As part of NLM's participation in HPCC, Lindberg promoted R&D and demonstrations in telemedicine, including testbeds, medical data privacy, medical decision-making, and health education. That telemedicine technologies were ready to meet demand during the COVID-19 pandemic is testament to Lindberg's visionary leadership.

13.
Comput Struct Biotechnol J ; 20: 2558-2563, 2022.
Article in English | MEDLINE | ID: covidwho-1850922

ABSTRACT

Tracking SARS-CoV-2 Variants of Concern via Whole Genome Sequencing represents a pillar of the public health measures for containment of the pandemic. The ability to track the lineage distribution on a local and global scale leads to a better understanding of immune escape and to adopting interventions to contain novel outbreaks. This scenario poses a challenge for NGS laboratories worldwide, which are pressed to achieve both a faster turnaround time and high-throughput processing of swabs for sequencing and analysis. In this study, we present an optimization of the Illumina COVID-seq protocol carried out on thousands of SARS-CoV-2 samples at the wet-lab and dry-lab level. We discuss the unique challenges of processing hundreds of swabs per week, such as the trade-off between ultra-high sensitivity and negative contamination levels, cost efficiency, and bioinformatics quality metrics.

14.
ACM Transactions on Parallel Computing ; 9(1), 2022.
Article in English | Scopus | ID: covidwho-1789035

ABSTRACT

The Radial Basis Function (RBF) technique is an interpolation method that produces high-quality unstructured adaptive meshes. However, the RBF-based boundary problem necessitates solving a large dense linear system with cubic arithmetic complexity that is computationally expensive and prohibitive in terms of memory footprint. In this article, we accelerate the computations of 3D unstructured mesh deformation based on RBF interpolations by exploiting the rank structured property of the matrix operator. The main idea consists in approximating the matrix off-diagonal tiles up to an application-dependent accuracy threshold. We highlight the robustness of our multiscale solver by assessing its numerical accuracy using realistic 3D geometries. In particular, we model the 3D mesh deformation on a population of the novel coronaviruses. We report and compare performance results on various parallel systems against existing state-of-the-art matrix solvers. © 2022 Association for Computing Machinery.
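
For context, classical RBF mesh deformation solves one dense kernel system for the boundary-node displacements and then evaluates the interpolant at the volume nodes; this dense solve is the cubic-cost step that the paper accelerates with rank-structured approximations of the off-diagonal tiles. The sketch below shows only the dense baseline on a toy geometry; the Gaussian kernel, its radius, the node counts, and the small diagonal shift are illustrative assumptions.

```python
# Illustrative only: the classical dense RBF mesh-deformation baseline. A kernel
# system over the boundary nodes is solved for interpolation weights, which are
# then evaluated at the volume nodes. The Gaussian kernel, its radius, the node
# counts, and the tiny diagonal shift are illustrative choices.
import numpy as np


def rbf_kernel(a, b, radius=0.5):
    # Pairwise Gaussian radial basis function values between two point sets.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return np.exp(-((d / radius) ** 2))


rng = np.random.default_rng(0)
boundary = rng.random((200, 3))                        # surface (boundary) node coordinates
boundary_disp = 0.05 * rng.standard_normal((200, 3))   # prescribed boundary displacements
volume = rng.random((1000, 3))                         # volume nodes to be deformed

A = rbf_kernel(boundary, boundary)
A += 1e-10 * np.eye(len(boundary))                     # tiny diagonal shift for numerical stability
weights = np.linalg.solve(A, boundary_disp)            # one weight column per coordinate (x, y, z)
volume_disp = rbf_kernel(volume, boundary) @ weights   # displacements interpolated to the volume

print("interpolated displacement field:", volume_disp.shape)  # (1000, 3)
```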

15.
21st Smoky Mountains Computational Sciences and Engineering Conference, SMC 2021 ; 1512 CCIS:157-172, 2022.
Article in English | Scopus | ID: covidwho-1777653

ABSTRACT

The “Force for Good” pledge of intellectual property to fight COVID-19 brought HPE products, resources, and expertise to bear on the problem of drug/vaccine discovery. Several scientists and technologists collaborated to accelerate efforts towards a cure. This paper documents the spirit of that collaboration, the stellar outcomes, and the technological lessons learned from the true convergence of high-performance computing (HPC), artificial intelligence (AI), and data science to fight a pandemic. The paper presents technologies that assisted in an end-to-end, edge-to-supercomputer pipeline: creating 3D structures of the virus from CryoEM microscopes, filtering through large cheminformatics databases of drug molecules, using artificial intelligence and molecular docking simulations to identify drug candidates that may bind with the 3D structures of the virus, validating the binding activity using in-silico high-fidelity multi-body physics simulations, and combing through millions of literature-based facts and assay data to connect the dots of evidence and explain or dispute the in-silico predictions. These contributions accelerated scientific discovery by: (i) identifying novel drug molecules that could reduce COVID-19 virality in the human body, (ii) screening drug molecule databases to design wet-lab experiments faster and better, (iii) hypothesizing the cross-immunity of tetanus vaccines based on comparisons of COVID-19 and publicly available protein sequences, and (iv) prioritizing drug compounds that could be repurposed for COVID-19 treatment. We present case studies around each of these outcomes and posit an accelerated future of drug discovery in an augmented and converged workflow of data science, high-performance computing, and artificial intelligence. © 2022, Springer Nature Switzerland AG.

16.
National Technical Information Service; 2021.
Non-conventional in English | National Technical Information Service | ID: grc-753692

ABSTRACT

I am excited to share with you the DoD High Performance Computing Modernization Program (HPCMP) Annual Highlights for 2019 and 2020. As you will see, the HPCMP has had another outstanding 2 years of accomplishments and growth. Using the 2018 National Defense Strategy as a guidepost, we have focused on supporting our customers from the government, industry, and academia with state-of-the-art high-performance computing (HPC) services and resources to help them find solutions to the most challenging science and technology (S&T), test and evaluation (T&E), and acquisition engineering problems facing our Department of Defense. With national defense priorities shifting to potential conflict with near-peer adversaries like China and Russia, the HPCMP ecosystem that has been developed over the past 25 years has proven it is ideally suited to these challenges.

17.
National Technical Information Service; 2020.
Non-conventional in English | National Technical Information Service | ID: grc-753555

ABSTRACT

Chronic low back pain constitutes the major form of chronic pain, with a prevalence as high as 70-85 percent in adults at some time in their lives. This 26-week, double-blind, randomized, placebo-controlled, two-arm, parallel-group study will evaluate 244 participants to determine if treatment with d-cycloserine in individuals with chronic, refractory low back pain will demonstrate greater reduction in pain compared to individuals treated with placebo. After a two-week screening period, individuals are randomized to receive either 12 weeks of d-cycloserine or placebo and then followed for an additional 12 weeks to evaluate persistence of benefit at the study endpoint, 24 weeks after randomization. Follow-up visits and data collection will occur at baseline and 2, 6, 12, and 24 weeks after randomization to assess general health, pain, proper treatment use, and side effects. Pain and safety will also be assessed at 16 and 20 weeks after randomization by phone calls.

18.
2021 IEEE International Conference on Big Data, Big Data 2021 ; : 1566-1574, 2021.
Article in English | Scopus | ID: covidwho-1730887

ABSTRACT

We study the role of vaccine acceptance in controlling the spread of COVID-19 in the US using AI-driven agent-based models. Our study uses a 288 million node social contact network spanning all 50 US states plus Washington DC, comprising 3300 counties, with 12.59 billion daily interactions. The highly resolved agent-based models use realistic information about disease progression, vaccine uptake, production schedules, acceptance trends, prevalence, and social distancing guidelines. Developing a national model at this resolution that is driven by realistic data requires a complex scalable workflow with model calibration, simulation, and analytics components. Our workflow optimizes the total execution time and helps improve overall human productivity. This work develops a pipeline that can execute US-scale models and associated workflows, which typically present significant big data challenges. Our results show that, when compared to faster and accelerating vaccinations, slower vaccination rates due to vaccine hesitancy cause averted infections to drop from 6.7M to 4.5M, and averted total deaths to drop from 39.4K to 28.2K nationwide. This occurs despite the fact that the final vaccine coverage is the same in both scenarios. Improving vaccine acceptance by 10% in all states increases averted infections from 4.5M to 4.7M (a 4.4% improvement) and averted deaths from 28.2K to 29.9K (a 6% increase) nationwide. The analysis also reveals interesting spatio-temporal differences in COVID-19 dynamics as a result of vaccine acceptance. To our knowledge, this is the first national-scale analysis of the effect of vaccine acceptance on the spread of COVID-19 using detailed and realistic agent-based models. © 2021 IEEE.

19.
Association for Computing Machinery. Communications of the ACM ; 65(2):18, 2022.
Article in English | ProQuest Central | ID: covidwho-1710424

ABSTRACT

ACM has named a 14-person team from Chinese institutions as recipients of the 2021 ACM Gordon Bell Prize for their project, Closing the "Quantum Supremacy" Gap: Achieving Real-Time Simulation of a Random Quantum Circuit Using a New Sunway Supercomputer. The ACM Gordon Bell Prize tracks the progress of parallel computing and rewards innovation in applying high-performance computing to challenges in science, engineering, and large-scale data analytics. In addition, ACM awarded the 2021 ACM Gordon Bell Special Prize for High Performance Computing-Based COVID-19 Research to a six-member team from Japan for their project Digital transformation of droplet/ aerosol infection risk assessment realized on "Fugaku" for the fight against COVID-19.

20.
32nd Congress of the International Council of the Aeronautical Sciences, ICAS 2021 ; 2021.
Article in English | Scopus | ID: covidwho-1697952

ABSTRACT

Aviation is undergoing a transformation, fueled by the coronavirus pandemic, and methods are sought to evaluate new technologies for more economical and environmentally friendly flight in a timelier manner and to enable new aircraft to be designed (almost) exclusively using computers. The DLR project Victoria brings together disciplinary methods and tools of different fidelity for collaborative multidisciplinary design optimization (MDO) of long-range passenger aircraft configurations, necessitating the use of high-performance computing. Three different approaches are being followed to master the complex interactions of disciplines and software aspects: an integrated aero-structural wing optimization based on high-fidelity methods, a multi-fidelity gradient-based approach capable of efficiently dealing with many design parameters and many load cases, and a many-discipline, highly parallel approach, which is a novel approach to computationally demanding and collaboration-intensive MDO. The XRF-1, an Airbus-provided research aircraft configuration representing a typical long-range wide-body aircraft, is used as a common test case to demonstrate the different MDO strategies. Parametric disciplinary models are used for overall aircraft design synthesis, loads analysis, flutter, structural analysis and optimization, engine design, and aircraft performance. The different MDO strategies are shown to be effective in dealing with complex, real-world MDO problems in a highly collaborative, cross-institutional design environment involving many disciplinary groups and experts and a mix of commercial and in-house design and analysis software. © 2021 32nd Congress of the International Council of the Aeronautical Sciences, ICAS 2021. All rights reserved.
